Session 7-E

Network Economics

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

A Lightweight Auction Framework for Spectrum Allocation with Strong Security Guarantees

Ke Cheng (Xidian University, China); Liangmin Wang (Jiangsu University, China); Yulong Shen and Yangyang Liu (Xidian University, China); Yongzhi Wang (Park University, USA); Lele Zheng (Xidian University, Xi'an, Shaanxi, China)

Auctions are an effective mechanism to distribute spectrum resources. Although many privacy-preserving auction schemes for spectrum allocation have been proposed, none of them can perform practical spectrum auctions while ensuring sufficient security for bidders' private information, such as geo-locations, bid values, and data access patterns. To address this problem, we propose SLISA, a lightweight auction framework that enables efficient spectrum allocation without revealing anything but the auction outcome, i.e., the winning bidders and their clearing prices. We present contributions on two fronts. First, as a foundation of our design, we adopt a Shuffle-then-Compute strategy to build a series of secure sub-protocols based on lightweight cryptographic primitives (e.g., additive secret sharing and basic garbled circuits). Second, we improve an advanced spectrum auction mechanism to make it data-oblivious, such that data access patterns are hidden, while the modified protocols fit our building blocks without affecting their validity or security. We formally prove the security of all protocols under a semi-honest adversary model, and demonstrate performance improvements over state-of-the-art works through extensive experiments.
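As a rough illustration of the additive-secret-sharing primitive such Shuffle-then-Compute designs build on (a minimal Python sketch, not SLISA's actual sub-protocols; the modulus choice and two-party setting are assumptions), two servers can each hold a random-looking share of a bid and still add bids share-wise:

```python
import random

P = 2**61 - 1  # prime modulus for the share arithmetic (illustrative choice)

def share(x, n=2):
    """Split secret x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Adding two secrets share-wise: each party adds only its local shares.
a, b = 42, 58
sa, sb = share(a), share(b)
summed = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(summed))  # 100
```

Each party sees only uniformly random values; only the sum of all shares reveals the secret, which is why shuffling plus share-wise arithmetic can hide bid values from any single server.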

Fair and Protected Profit Sharing for Data Trading in Pervasive Edge Computing Environments

Yaodong Huang, Yiming Zeng, Fan Ye and Yuanyuan Yang (Stony Brook University, USA)

Innovative edge devices (e.g., smartphones, IoT devices) are becoming ever more pervasive in our daily lives. With powerful sensing and computing capabilities, users can generate massive amounts of data. A new business model has emerged in which data producers sell their data directly to consumers. However, how to protect the profit of the data producer from rogue consumers that may resell without authorization remains challenging. In this paper, we propose a smart-contract-based protocol that protects the profit of the data producer while allowing consumers to resell the data legitimately. The protocol ensures the revenue is shared with the data producer over authorized reselling, and detects any unauthorized reselling. We formulate a fair revenue-sharing problem to maximize the profit of both the data producer and resellers, model it as a two-stage Stackelberg game, and determine a ratio for splitting the reselling revenue between the data producer and resellers. Extensive simulations show that with resellers, our mechanism can achieve up to 97.8% higher profit for the data producer and resellers.
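The backward-induction logic of a two-stage Stackelberg game can be sketched with a toy market (a hypothetical quadratic reseller cost, not the paper's actual model): the producer picks the sharing ratio anticipating the reseller's best response.

```python
def reseller_best_effort(r, c=1.0):
    # Stage 2 (follower): reseller keeps fraction (1 - r) of revenue and
    # pays a quadratic effort cost; maximizing (1 - r)*q - c*q**2/2
    # gives the closed-form best response q* = (1 - r) / c.
    return (1.0 - r) / c

def producer_revenue(r, c=1.0):
    # Stage 1 (leader): producer's share r times the anticipated effort.
    return r * reseller_best_effort(r, c)

# Grid-search the leader's optimal sharing ratio.
best_r = max((i / 1000 for i in range(1001)), key=producer_revenue)
print(round(best_r, 2))  # 0.5
```

Here the leader's payoff r(1-r)/c peaks at r = 1/2; the paper derives the analogous optimal ratio for its richer demand and cost structure.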

Secure Balance Planning of Off-blockchain Payment Channel Networks

Peng Li and Toshiaki Miyazaki (The University of Aizu, Japan); Wanlei Zhou (University of Technology Sydney, Australia)

Off-blockchain payment channels can significantly improve blockchain scalability by enabling a large number of micro-payments between two blockchain nodes, without committing every single payment to the blockchain. Multiple payment channels form a payment network, so that two nodes without a direct channel can still make payments. A critical challenge in payment network construction is to decide how many funds should be deposited into payment channels as initial balances; this seriously influences the performance of payment networks, but has seldom been studied by existing work. In this paper, we address this challenge by designing PnP, a balance planning service for payment networks. Given estimated payment demands among nodes, PnP can decide channel balances that satisfy these demands with high probability. It does not rely on any trusted third parties, and provides strong protection from malicious attacks with low overhead. It obtains these benefits through two novel designs: cryptographic sortition and a chance-constrained balance planning algorithm. Experimental results on a testbed of 30 nodes show that PnP can enable 30% more payments than other designs.
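The chance-constrained idea can be sketched roughly as an empirical-quantile rule: given sampled payment demands for a channel, pick the smallest balance that covers them with probability at least p (an illustrative sketch, not PnP's actual algorithm; the demand distribution is hypothetical).

```python
import random

def plan_balance(demand_samples, p=0.95):
    """Pick the smallest balance covering at least a fraction p of sampled demands."""
    s = sorted(demand_samples)
    idx = min(len(s) - 1, int(p * len(s)))
    return s[idx]

random.seed(0)
demands = [random.gauss(100, 15) for _ in range(10000)]  # hypothetical demand samples
b = plan_balance(demands, 0.95)
covered = sum(d <= b for d in demands) / len(demands)
print(covered >= 0.95)  # True
```

Depositing the full worst-case demand would lock up far more funds; the chance constraint trades a small miss probability for much smaller initial balances.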

Travel with Your Mobile Data Plan: A Location-Flexible Data Service

Zhiyuan Wang (The Chinese University of Hong Kong, Hong Kong); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Hong Kong)

Mobile Network Operators (MNOs) provide wireless data services based on a tariff data plan with a monthly data cap. Traditionally, the data cap is only valid for domestic data consumption, and users have to pay extra roaming fees for overseas data consumption. A recent location-flexible service allows the user to access the domestic data cap in overseas locations (by configuring location-flexibility for a daily fee). This paper studies the economic effect of the location-flexibility on the overseas market. The overseas market comprises users who travel overseas within the month, and thus varies from month to month. Each user decides his joint flexibility configuration and data consumption (J-FCDC) every day. The user's J-FCDC problem is an on-line payoff maximization. We analyze its off-line counterpart (which is NP-hard) and design an on-line strategy with provable performance. Moreover, we propose a pricing policy for the location-flexible service that does not rely on market statistics. We find that the location-flexibility induces users to consume more data on low-valuation days, and that the MNO benefits from stimulating users' data consumption through appropriate pricing. Numerical results based on empirical data show that the location-flexibility improves the MNO's revenue by 18% and the users' payoffs by 12% on average.
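A single day's flexibility decision can be caricatured as a simple payoff comparison (all prices hypothetical; the paper's on-line J-FCDC strategy jointly optimizes across days and is far more involved):

```python
def configure_flexibility(usage_value, data_gb, roaming_per_gb, flex_fee):
    """Pay the daily flexibility fee iff it beats paying per-GB roaming."""
    payoff_roaming = usage_value - roaming_per_gb * data_gb  # no flexibility
    payoff_flex = usage_value - flex_fee  # flat daily fee, data drawn from the cap
    return payoff_flex > payoff_roaming  # True -> configure flexibility today

# 2 GB at $3/GB roaming ($6) vs. a $4 daily fee -> flexibility wins.
print(configure_flexibility(10.0, 2.0, 3.0, 4.0))  # True
```

The on-line difficulty is that today's cap consumption affects what remains for future (possibly higher-valuation) days, which is why the myopic rule above is only a caricature.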

Session Chair

Murat Yuksel (University of Central Florida)

Session 8-E

Load Balancing

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

Classification of Load Balancing in the Internet

Rafael Almeida and Italo Cunha (Universidade Federal de Minas Gerais, Brazil); Renata Teixeira (Inria, France); Darryl Veitch (University of Technology Sydney, Australia); Christophe Diot (Google, USA)

Recent advances in programmable data planes, software-defined networking, and even the adoption of IPv6, support novel, more complex load balancing strategies. We introduce the Multipath Classification Algorithm (MCA), which extends existing formalism and techniques to consider that load balancers may use arbitrary combinations of bits in the packet header for load balancing. We propose optimizations to reduce probing cost that are applicable to MCA and existing load balancing measurement techniques. Through large-scale measurement campaigns, we characterize and study the evolution of load balancing on the IPv4 and IPv6 Internet performing measurement campaigns with multiple transport protocols. Our results show that load balancing is more prevalent and that load balancing strategies are more mature than previous characterizations have found.
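The core probing idea, flipping one header field at a time and watching whether the nexthop changes, can be sketched against a toy balancer (illustrative only; MCA classifies arbitrary bit combinations and also optimizes probing cost, and the field names here are hypothetical):

```python
def simulated_balancer(header, used_fields):
    """Toy per-flow balancer: the nexthop depends only on the fields in used_fields."""
    return tuple(header[f] for f in sorted(used_fields))

def classify(header, fields, probe):
    """Flip one field at a time; a changed nexthop reveals that the field is used."""
    base = probe(header)
    used = set()
    for f in fields:
        flipped = dict(header, **{f: header[f] ^ 1})
        if probe(flipped) != base:
            used.add(f)
    return used

hdr = {"src_port": 0, "dst_port": 1, "flow_label": 0}
probe = lambda h: simulated_balancer(h, {"src_port", "flow_label"})
print(sorted(classify(hdr, hdr.keys(), probe)))  # ['flow_label', 'src_port']
```

Real measurement must additionally cope with hashed (rather than field-identity) balancers, per-packet randomness, and the cost of each probe, which is what the paper's optimizations target.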

Offloading Dependent Tasks in Mobile Edge Computing with Service Caching

Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Yangming Zhao and Chunming Qiao (University at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China)

In Mobile Edge Computing (MEC) applications, many tasks require specific service support for execution and, in addition, have a dependent order of execution. However, previous works often ignore the impact of limited service caching at the edge nodes on (dependent) task offloading, which may lead to infeasible offloading decisions or longer completion times. To bridge the gap, this paper studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching. We call the problem of minimizing the makespan under these constraints ODT-MM, and prove that no constant-factor approximation algorithm exists for this hard problem. We then design an efficient convex programming based algorithm (CP) to solve it. Moreover, we study a special case with a homogeneous MEC and propose a favorite-successor-based algorithm (FS) that solves this special case with a competitive ratio of O(1). Extensive simulation results using Google data traces show that our proposed algorithms can significantly reduce applications' completion times, by about 27-51% compared with other alternatives.
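A service-constrained list-scheduling sketch conveys the flavor of the problem (hypothetical inputs, not the paper's CP or FS algorithms): tasks in topological order may only run on edge nodes that cache the required service, and each task waits for its predecessors.

```python
def schedule(tasks, deps, service_of, cached, proc_time):
    """List-schedule dependent tasks onto nodes that cache the needed service.

    tasks: task ids in topological order; deps: task -> predecessor list;
    service_of: task -> required service; cached: node -> set of cached services;
    proc_time: task -> {feasible node: processing time}. Returns the makespan.
    """
    finish = {}
    free_at = {n: 0.0 for n in cached}
    for t in tasks:
        ready = max((finish[d] for d in deps.get(t, [])), default=0.0)
        # only nodes caching task t's required service are feasible
        feas = [n for n in cached if service_of[t] in cached[n]]
        node = min(feas, key=lambda n: max(free_at[n], ready) + proc_time[t][n])
        start = max(free_at[node], ready)
        finish[t] = start + proc_time[t][node]
        free_at[node] = finish[t]
    return max(finish.values())

cached = {"A": {"s1"}, "B": {"s2"}}
pt = {"t1": {"A": 2.0}, "t2": {"B": 3.0}}
# t2 depends on t1 but needs a different service, so it must run on node B.
print(schedule(["t1", "t2"], {"t2": ["t1"]}, {"t1": "s1", "t2": "s2"}, cached, pt))  # 5.0
```

The example shows why ignoring caching is dangerous: a placement that looks fast can be infeasible outright if no chosen node caches the required service.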

One More Config is Enough: Saving (DC)TCP for High-speed Extremely Shallow-buffered Datacenters

Wei Bai (Microsoft Research Asia, China); Shuihai Hu (The Hong Kong University of Science and Technology, China); Kai Chen (Hong Kong University of Science and Technology, China); Kun Tan (Huawei, China); Yongqiang Xiong (Microsoft Research Asia, China)

The link speed in production datacenters is growing fast, from 1Gbps to 40Gbps or even 100Gbps. However, the buffer size of commodity switches increases slowly, e.g., from 4MB at 1Gbps to 16MB at 100Gbps, and is thus significantly outpaced by the link speed. In such extremely shallow-buffered networks, today's TCP/ECN solutions, such as DCTCP, suffer from either excessive packet loss or substantial throughput degradation.

To this end, we present BCC, a simple yet effective solution that requires just one more ECN configuration than prior solutions. BCC operates based on real-time global buffer utilization. When the available buffer suffices, BCC delivers both high throughput and a low packet loss rate, as in prior work; once it becomes insufficient, BCC automatically triggers shared-buffer ECN to prevent packet loss at the cost of a small throughput sacrifice. BCC is readily deployable with commodity switches. We validate BCC's feasibility in a small 100G testbed and evaluate its performance using large-scale simulations. Our results show that BCC maintains a low packet loss rate while only slightly degrading throughput when the buffer becomes insufficient. For example, compared to current practice, BCC achieves up to 94.4% lower 99th-percentile FCT for small flows while degrading FCT for large flows by at most 3%.
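The switching logic can be caricatured as follows (threshold values here are hypothetical; BCC's actual mechanism runs inside the switch on real-time shared-buffer occupancy):

```python
def ecn_config(buffer_used_mb, buffer_total_mb, per_port_threshold_kb,
               shared_threshold_kb, safety_fraction=0.9):
    """Return the active ECN marking mode and threshold.

    Use the standard per-port ECN threshold while headroom remains;
    fall back to an aggressive shared-buffer threshold under pressure.
    """
    if buffer_used_mb < safety_fraction * buffer_total_mb:
        return ("per_port", per_port_threshold_kb)
    return ("shared_buffer", shared_threshold_kb)

print(ecn_config(10, 16, 400, 100))    # ('per_port', 400)
print(ecn_config(15.5, 16, 400, 100))  # ('shared_buffer', 100)
```

The point of the design is that only the second, shared-buffer configuration is new relative to standard DCTCP deployments, hence "one more config".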

TINA: A Fair Inter-datacenter Transmission Mechanism with Deadline Guarantee

Xiaodong Dong (Tianjin University, China); Wenxin Li (Hong Kong University of Science & Technology, Hong Kong); Xiaobo Zhou and Keqiu Li (Tianjin University, China); Heng Qi (Dalian University of Technology, China)

A geographically distributed cloud is a promising technique to achieve high performance for service providers. For inter-datacenter transfers, deadline guarantees and fairness are the two most important requirements. On the one hand, to ensure that more transfers finish before their deadlines, preemptive scheduling policies are widely used, which leads to transfer starvation and is hence unfair. On the other hand, to ensure fairness, inter-datacenter bandwidth is fairly shared among transfers with per-flow bandwidth allocation, which leads to missed deadlines. A mechanism that achieves these two seemingly conflicting objectives simultaneously is still missing. In this paper, we propose TINA to schedule network transfers fairly while providing deadline guarantees. TINA allows each transfer to compete freely with the others for bandwidth: each transfer is assigned a probability that determines whether it transmits. We formulate the competition among the transfers as an El Farol game while keeping the traffic load under a threshold to avoid congestion. We then prove that the Nash Equilibrium is the optimal strategy and propose a lightweight algorithm to derive it. Finally, both simulation and testbed experiment results show that TINA outperforms state-of-the-art methods in terms of fairness and deadline guarantee rate.
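In a symmetric mixed strategy for such a congestion game, each transfer transmits with a probability chosen so that the expected load stays at the capacity threshold, roughly (a one-line sketch of the load-capping idea, not TINA's equilibrium computation):

```python
def transmit_probability(n_transfers, capacity_slots):
    """Symmetric mixed strategy: expected load n*p is capped at the threshold."""
    return min(1.0, capacity_slots / n_transfers)

p = transmit_probability(10, 6)
print(p)  # 0.6 -> expected load 10 * 0.6 = 6 slots, exactly at capacity
```

Because every transfer uses the same probability, no transfer is starved in expectation, which is the fairness intuition behind randomizing access instead of strict preemption.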

Session Chair

Mingkui Wei (Sam Houston State University)

Session 9-E

Routing

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

Cost Minimization in Multi-Path Communication under Throughput and Maximum Delay Constraints

Qingyu Liu and Haibo Zeng (Virginia Tech, USA); Minghua Chen (The City University of Hong Kong, Hong Kong); Lingjia Liu (Virginia Tech, USA)

We consider the scenario where a sender streams a flow at a fixed rate to a receiver across a multi-hop network. Transmission over a link incurs a cost and a delay, both of which are traffic-dependent. We study the problem of cost minimization in multi-path routing under a maximum delay constraint and a throughput requirement. The problem is important for leveraging the edge-cloud computing platform to support IoT applications, which are sensitive to three critical networking metrics: cost, maximum delay, and throughput. Our problem jointly considers the three metrics, while existing ones only account for one or two of them. Solving our problem is challenging, as (i) it is NP-complete even to find a feasible solution, and (ii) directly extending existing solutions admits problem-dependent maximum-delay violations that can be unbounded in certain instances. We design an approximation algorithm and an efficient heuristic. For each feasible instance, our approximation algorithm achieves the optimal cost while violating the constraints by at most constant ratios. Our heuristic can solve a large portion of feasible instances, and its solutions satisfy all constraints. We further characterize a condition under which the cost of our heuristic is within a problem-dependent gap of the optimum.

Hop-by-Hop Multipath Routing: Choosing the Right Nexthop Set

Klaus Schneider and Beichuan Zhang (University of Arizona, USA); Lotfi Benmohamed (National Institute of Standards and Technology, USA)

The Internet can be made more efficient and robust with hop-by-hop multipath routing: each router on the path can split packets between multiple nexthops in order to 1) avoid failed links and 2) reduce traffic on congested links. Before deciding how to split traffic, one first needs to decide which nexthops to allow at each step. In this paper, we investigate the requirements and trade-offs for making this choice. Most related work chooses the viable nexthops by applying the "Downward Criterion", i.e., only adding nexthops that lead closer to the destination, or more generally by creating a Directed Acyclic Graph (DAG) for each destination. We show that a DAG's nexthop options are necessarily limited, and that, by using certain links in both directions (per destination), we can add further nexthops while still avoiding loops. Our solution, LFID (Loop-Free Inport-Dependent) routing, though having a slightly higher time complexity, leads to both more and shorter potential paths than related work. LFID thus protects against a higher percentage of single and multiple failures (or congestions) and comes close to the performance of arbitrary source routing.
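The Downward Criterion that LFID generalizes can be sketched directly: compute each node's distance to the destination, then allow a neighbor as a nexthop only if it is strictly closer (a minimal Dijkstra-based sketch of the baseline criterion, not LFID itself).

```python
import heapq

def shortest_dists(graph, dst):
    """Dijkstra distances to dst over an undirected weighted graph."""
    dist = {dst: 0}
    pq = [(0, dst)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def downward_nexthops(graph, dst):
    """Downward Criterion: allow nexthop v at u iff dist(v) < dist(u)."""
    dist = shortest_dists(graph, dst)
    return {u: [v for v, _ in graph[u] if dist[v] < dist[u]]
            for u in graph if u != dst}

g = {"A": [("B", 1), ("C", 1)], "B": [("A", 1), ("D", 1)],
     "C": [("A", 1), ("D", 1)], "D": [("B", 1), ("C", 1)]}
print(sorted(downward_nexthops(g, "D")["A"]))  # ['B', 'C']
```

Note how B and C each get only one nexthop (D) under this criterion; LFID's contribution is to safely admit additional, non-downward nexthops like B→A→C→D without creating loops.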

Joint Power Routing and Current Scheduling in Multi-Relay Magnetic MIMO WPT System

Hao Zhou, Wenxiong Hua, Jialin Deng, Xiang Cui, Xiang-Yang Li and Panlong Yang (University of Science and Technology of China, China)

Magnetic resonant coupling wireless power transfer (MRC-WPT) enables convenient device charging. When a MIMO MRC-WPT system incorporates multiple relay components, both the relay On-Off states (i.e., power routing) and the TX currents (i.e., current scheduling) can be adjusted to improve charging efficiency and distance. Previous approaches need collaboration and feedback from the energy receiver (RX), achieved using side-channels, e.g., Bluetooth, which is time- and energy-consuming. In this work we propose, design, and implement a multi-relay MIMO MRC-WPT system with an almost-optimum joint optimization of power routing and current scheduling, without relying on any feedback from the RX. We carefully decompose the joint optimization problem into two subproblems without affecting the overall optimality of the combined solution. For the current scheduling subproblem, we propose an almost-optimum, RX-feedback-independent solution. For the power routing subproblem, we first design a greedy algorithm with a 1/2 approximation ratio, and then design a DQN-based method to further improve its effectiveness. We prototype our system and evaluate it with extensive experiments. Our results demonstrate the effectiveness of the proposed algorithms. The achieved power transfer efficiency (PTE) is on average 3.2x, 1.43x, 1.34x, and 7.3x that of the other four strategies: without relays, with nonadjustable relays, greedy-based, and shortest-path-based.
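A generic marginal-gain greedy skeleton conveys how a greedy power-routing step might look (the gain function below is a toy stand-in for measured transfer efficiency; the paper's 1/2 approximation guarantee applies to its actual objective, not to this sketch):

```python
def greedy_relay_selection(relays, gain):
    """Greedily switch on the relay with the largest marginal gain until none helps."""
    chosen = set()
    improved = True
    while improved:
        improved = False
        best, best_delta = None, 0.0
        for r in set(relays) - chosen:
            delta = gain(chosen | {r}) - gain(chosen)  # marginal benefit of relay r
            if delta > best_delta:
                best, best_delta = r, delta
        if best is not None:
            chosen.add(best)
            improved = True
    return chosen

# Toy gain with diminishing returns in the number of active relays.
g = lambda s: sum(1.0 / (i + 1) for i in range(len(s)))
print(len(greedy_relay_selection(["r1", "r2", "r3"], g)))  # 3
```

The DQN-based method in the paper replaces this myopic marginal-gain rule with a learned policy over relay On-Off configurations.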

Verifying Policy-based Routing at Internet Scale

Xiaozhe Shao and Lixin Gao (University of Massachusetts at Amherst, USA)

Routing policy configuration plays a crucial role in determining the path that network traffic takes to reach a destination. Network administrators/operators typically decide the routing policy for their networks/routers independently. The paths/routes resulting from these independently configured routing policies might not necessarily meet the intent of the network administrators/operators. Even very basic network-wide properties of the routing policies, such as reachability between a pair of nodes, need to be verified.

In this paper, we propose a scheme that encodes routing-policy verification problems as Satisfiability Modulo Theories (SMT) problems. The key idea is to formulate the SMT model in a policy-aware manner so as to reduce or eliminate mutual dependencies between variables as much as possible. Further, we reduce the size of the generated SMT model through pruning. We implement and evaluate the policy-aware model with an off-the-shelf SMT solver. The experimental results show that the policy-aware model can reduce verification time by as much as 100x even at modest topology sizes. It takes only a few minutes to answer a query for a topology containing tens of thousands of nodes.

Session Chair

Jie Wu (Temple University)

Session 10-E

Cloud Computing

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

Enabling Live Migration of Containerized Applications Across Clouds

Thad Benjaponpitak, Meatasit Karakate and Kunwadee Sripanidkulchai (Chulalongkorn University, Thailand)

Live migration, the process of transferring a running application to a different physical location with minimal downtime, can provide many benefits desired by modern cloud-based systems. Furthermore, live migration across data centers of different cloud providers gives cloud users a new level of freedom to move their workloads around for performance or business objectives without being tied to any single provider. While this vision is not new, to date there are few solutions and proofs-of-concept that provide this capability. As containerized applications are gaining popularity, we focus on the design and implementation of live migration of containers across cloud providers. A proof-of-concept live migration service between Amazon Web Services, Google Cloud Platform, and Microsoft Azure is evaluated using a common web-based workload. Our implemented service is automated; includes pre-copy optimization, connection holding, traffic redirection, and support for multiple interdependent containers; and is applicable to a broad range of application use cases.

Online Placement of Virtual Machines with Prior Data

David Naori (Technion, Israel); Danny Raz (Nokia and Technion, Israel)

The cloud computing market has a wide variety of customers that deploy various applications from deep learning to classical web services. Each application may have different computing, memory and networking requirements, and each customer may be willing to pay a different price for the service. When a request for a VM arrives, the cloud provider decides online whether or not to serve it and which resources to allocate for this purpose. The goal is to maximize the revenue while obeying the constraints imposed by the limited physical infrastructure and its layout.

Although requests arrive online, cloud providers are not entirely in the dark; historical data is readily available, and may contain strong indications regarding future requests. Thus, standard theoretical models that assume the online player has no prior knowledge are inadequate. In this paper we adopt a recent theoretical model for the design and analysis of online algorithms that allows taking such historical data into account. We develop new competitive online algorithms for multidimensional resource allocation and analyze their guaranteed performance. Moreover, using extensive simulation over real data from Google and AWS, we show that our new approach yields much higher revenue to cloud providers than currently used heuristics.
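The use of prior data can be caricatured as learning an acceptance threshold from historical per-unit prices and applying it to online requests (hypothetical numbers; not the paper's competitive algorithms, which handle multidimensional resources and come with performance guarantees):

```python
def learned_threshold(history_prices, quantile=0.5):
    """Set an acceptance threshold from historical per-unit prices."""
    s = sorted(history_prices)
    return s[int(quantile * (len(s) - 1))]

def serve(requests, capacity, threshold):
    """Accept a VM request iff it fits and its price-per-unit clears the threshold."""
    revenue = 0.0
    for demand, price in requests:
        if demand <= capacity and price / demand >= threshold:
            capacity -= demand
            revenue += price
    return revenue

thr = learned_threshold([1.0, 2.0, 3.0, 4.0, 5.0])  # 3.0 per unit
# Reject the low-value request, keeping capacity for the two high-value ones.
print(serve([(2, 4.0), (2, 8.0), (2, 7.0)], 4, thr))  # 15.0
```

Without the historical threshold, a greedy online policy would accept the first request and forgo the more valuable later ones; the learned threshold is what lets the online player decline profitably.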

PAM & PAL: Policy-Aware Virtual Machine Migration and Placement in Dynamic Cloud Data Centers

Hugo Flores and Vincent Tran (CSUDH, USA); Bin Tang (California State University Dominguez Hills, USA)

In this paper we focus on policy-aware data centers (PADCs), wherein virtual machine (VM) traffic traverses a sequence of middleboxes (MBs) for security and performance purposes, and study two new VM migration and placement problems. We first study PAM: policy-aware virtual machine migration. Given an existing VM placement in the PADC, a data center policy that communicating VM pairs must satisfy, and dynamic traffic rates among VMs, the goal of PAM is to migrate VMs in order to minimize the total cost of migration and communication of all the VM pairs. We design optimal, approximation, and heuristic policy-aware VM migration algorithms. Then we study PAL: policy-aware virtual machine placement, and show that it is a special case of PAM. Further, by exploiting unique structures of VM placement in PADCs, we design new PAL algorithms that are more time-efficient than, while achieving the same optimality and approximation guarantees as, those for PAM. Our experiments show that i) VM migration reduces the total communication cost of VM pairs by 30%, ii) our PAM algorithms outperform the only existing policy-aware VM migration scheme by 20-30%, and iii) our PAL algorithms outperform a state-of-the-art VM placement algorithm that is oblivious to data center policies by 40-50%.

SplitCast: Optimizing Multicast Flows in Reconfigurable Datacenter Networks

Long Luo (University of Electronic Science and Technology of China, China); Klaus-Tycho Foerster and Stefan Schmid (University of Vienna, Austria); Hongfang Yu (University of Electronic Science and Technology of China, China)

Many modern cloud applications frequently generate multicast traffic, which is becoming one of the primary communication patterns in datacenters. Emerging reconfigurable datacenter technologies enable interesting new opportunities to support such multicast traffic in the physical layer: novel circuit switches offer high-performance inter-rack multicast capabilities. However, not much is known today about the algorithmic challenges introduced by this new technology.

This paper presents SplitCast, a preemptive multicast scheduling approach that fully exploits emerging physical-layer multicast capabilities to reduce flow times. SplitCast dynamically reconfigures the circuit switches to adapt to the multicast traffic, accounting for reconfiguration delays. In particular, SplitCast relies on simple single-hop routing and leverages flexibilities by supporting splittable multicast so that a transfer can already be delivered to just a subset of receivers when the circuit capacity is insufficient. Our evaluation results show that SplitCast can reduce flow times significantly compared to state-of-the-art solutions.
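Splittable multicast at its simplest: when circuit capacity covers only part of the receiver set, deliver in rounds to subsets (a toy sketch that ignores reconfiguration delay and the scheduling order SplitCast optimizes):

```python
def split_multicast(receivers, circuit_capacity):
    """Split one multicast transfer into rounds of at most circuit_capacity receivers."""
    rounds = []
    for i in range(0, len(receivers), circuit_capacity):
        rounds.append(receivers[i:i + circuit_capacity])
    return rounds

print(split_multicast(["r1", "r2", "r3", "r4", "r5"], 2))
# [['r1', 'r2'], ['r3', 'r4'], ['r5']]
```

The benefit over unsplittable scheduling is that early rounds finish sooner, reducing flow times for the receivers they cover instead of forcing all receivers to wait for a circuit wide enough for the whole group.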

Session Chair

Sangtae Ha (University of Colorado Boulder)
